
    Automated Website Fingerprinting through Deep Learning

    Several studies have shown that the network traffic generated by a visit to a website over Tor reveals information specific to that website through the timing and sizes of network packets. By capturing traffic traces between users and their Tor entry guard, a network eavesdropper can leverage this metadata to reveal which websites Tor users are visiting. The success of such attacks depends heavily on the particular set of traffic features used to construct the fingerprint. Typically, these features are manually engineered and, as such, any change introduced to the Tor network can render these carefully constructed features ineffective. In this paper, we show that an adversary can automate the feature engineering process, and thus automatically deanonymize Tor traffic, by applying our novel method based on deep learning. We collect a dataset comprising more than three million network traces, the largest dataset of web traffic ever used for website fingerprinting, and find that the performance achieved by our deep learning approaches is comparable to that of known methods, which reflect research efforts spanning multiple years. The obtained success rate exceeds 96% for a closed world of 100 websites and 94% for our biggest closed world of 900 classes. In our open-world evaluation, the most performant deep learning model is 2% more accurate than the state-of-the-art attack. Furthermore, we show that the implicit features automatically learned by our approach are far more resilient to dynamic changes of web content over time. We conclude that the ability to automatically construct the most relevant traffic features and perform accurate traffic recognition makes our deep-learning-based approach an efficient, flexible and robust technique for website fingerprinting.
    Comment: To appear in the 25th Symposium on Network and Distributed System Security (NDSS 2018).
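    As a toy illustration of the input representation such attacks operate on, the sketch below encodes each traffic trace as a fixed-length sequence of packet directions (+1 outgoing, -1 incoming) and classifies with a trivial nearest-centroid rule. This is an editor's sketch only: the paper trains deep neural networks on far longer traces, and the tiny hand-made "site-a"/"site-b" traces here are invented for illustration.

    ```python
    # Toy website-fingerprinting sketch. A trace is a sequence of packet
    # directions; a nearest-centroid classifier stands in for the deep
    # model actually used in the paper (assumption for illustration).

    LENGTH = 8  # real attacks use traces thousands of packets long


    def vectorize(trace, length=LENGTH):
        """Truncate or zero-pad a direction sequence to a fixed length."""
        t = list(trace)[:length]
        return t + [0] * (length - len(t))


    def centroids(training):
        """Average the vectorized traces per site: {site: [trace, ...]}."""
        out = {}
        for site, traces in training.items():
            vecs = [vectorize(t) for t in traces]
            out[site] = [sum(col) / len(vecs) for col in zip(*vecs)]
        return out


    def classify(trace, cents):
        """Label a trace with the site whose centroid is nearest (L2)."""
        v = vectorize(trace)
        return min(
            cents,
            key=lambda s: sum((a - b) ** 2 for a, b in zip(v, cents[s])),
        )


    # Hypothetical training traces for two sites.
    training = {
        "site-a": [[1, 1, -1, -1, -1, 1, -1, -1], [1, 1, -1, -1, 1, 1, -1, -1]],
        "site-b": [[1, -1, 1, -1, 1, -1, 1, -1], [1, -1, 1, -1, 1, -1, -1, -1]],
    }
    cents = centroids(training)
    print(classify([1, 1, -1, -1, -1, 1, -1, 1], cents))  # → site-a
    ```

    The point of the representation is that no hand-engineered features (packet bursts, inter-arrival statistics, and so on) appear anywhere: the classifier consumes the raw direction sequence, which is what makes the approach robust to changes in the underlying protocol.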

    Frictionless Authentication Systems: Emerging Trends, Research Challenges and Opportunities

    Authentication and authorization are critical security layers that protect a wide range of online systems, services and content. However, the increased prevalence of wearable and mobile devices, the expectation of a frictionless experience and the diversity of user environments will challenge the way users are authenticated. Consumers demand secure and privacy-aware access from any device, whenever and wherever they are, without any obstacles. This paper reviews emerging trends and challenges in frictionless authentication systems and identifies opportunities for further research related to the enrollment of users, the usability of authentication schemes, and the security and privacy trade-offs of mobile and wearable continuous authentication systems.
    Comment: Published at the 11th International Conference on Emerging Security Information, Systems and Technologies (SECURWARE 2017).

    PIVOT: Private and effective contact tracing

    We propose, design, and evaluate PIVOT, a privacy-enhancing and effective contact tracing solution that aims to strike a balance between utility and privacy: one that does not collect sensitive information yet still allows effective tracing and notification of the close contacts of diagnosed users. PIVOT requires a considerably lower degree of trust in the entities involved than centralised alternatives while retaining the necessary utility. To protect users' privacy, it uses local proximity tracing based on broadcasting and recording constantly changing anonymous public keys via short-range communication. These public keys are used to establish a shared secret key between two people in close contact. The three keys (i.e., the two public keys and the established shared key) are then used to generate two unique per-user-per-contact hashes: one for infection registration and one for exposure score query. These hashes are never revealed to the public. To improve utility, user exposure score computation is performed centrally, which provides health authorities with minimal, yet insightful and actionable, data. Data minimisation is achieved by the use of per-user-per-contact hashes and by enforcing role separation: the health authority acts as a mixing node, while the matching between reported and queried hashes is outsourced to a third entity, an independent matching service. This separation ensures that out-of-scope information, such as users' social interactions, is hidden from the health authorities, while the matching service does not learn users' sensitive information. To sustain our claims, we conduct a practical evaluation that encompasses anonymity guarantees and energy requirements.
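    The three-key scheme described above (two exchanged public keys plus a derived shared key, yielding two per-user-per-contact hashes) can be sketched as follows. This is a toy sketch only: it uses textbook finite-field Diffie-Hellman over an insecure toy prime in place of whatever short-range key exchange PIVOT actually performs, and the exact hash derivation (domain-separated SHA-256 over the shared key and one public key) is an assumption, not the paper's construction.

    ```python
    import hashlib
    import secrets

    # Toy Diffie-Hellman parameters (NOT secure; stand-in for PIVOT's real,
    # unspecified short-range key exchange).
    P = 2 ** 127 - 1  # a Mersenne prime, used here purely for illustration
    G = 3


    def keypair():
        """An ephemeral (private, public) pair; PIVOT rotates these constantly."""
        priv = secrets.randbelow(P - 3) + 2
        return priv, pow(G, priv, P)


    def shared_key(my_priv, peer_pub):
        """Both contacts derive the same shared secret from the exchange."""
        return pow(peer_pub, my_priv, P)


    def contact_hashes(shared, user_pub):
        """Two per-user-per-contact hashes: one for infection registration,
        one for exposure score query. Hypothetical derivation: domain-
        separated SHA-256 over the shared key and the user's public key."""
        material = shared.to_bytes(16, "big") + user_pub.to_bytes(16, "big")
        reg = hashlib.sha256(b"register|" + material).hexdigest()
        qry = hashlib.sha256(b"query|" + material).hexdigest()
        return reg, qry


    # Two users in close contact broadcast public keys to each other.
    a_priv, a_pub = keypair()
    b_priv, b_pub = keypair()

    # Each side derives the same shared key, so each can recompute the
    # other's hashes; that is what lets the matching service link a
    # reported hash to a queried one without learning identities.
    s = shared_key(a_priv, b_pub)
    assert s == shared_key(b_priv, a_pub)
    a_reg, a_qry = contact_hashes(s, a_pub)  # A's hashes for this contact
    b_reg, b_qry = contact_hashes(s, b_pub)  # B's hashes for this contact
    ```

    Note how role separation follows naturally: the entity that stores registered hashes and the entity that answers queries only ever see opaque digests, never the keys or the identities behind them.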

    K8-Scalar: a workbench to compare autoscalers for container-orchestrated services (Artifact)

    This artifact is an easy-to-use and extensible workbench exemplar, named K8-Scalar, which allows researchers to implement and evaluate different self-adaptive approaches to autoscaling container-orchestrated services. The workbench is based on Docker, a popular technology for easing the deployment of containerized software that has also been positioned as an enabler for reproducible research. The workbench also relies on a container orchestration framework: Kubernetes (K8s), the de facto industry standard for orchestration and monitoring of elastically scalable container-based services. Finally, it integrates and extends Scalar, a generic testbed for evaluating the scalability of large-scale systems, with support for evaluating the performance of autoscalers for database clusters. The associated scholarly paper (i) presents the architecture and implementation of K8-Scalar and shows how a particular autoscaler can be plugged in, (ii) sketches the design of a Riemann-based autoscaler for database clusters, (iii) illustrates how to design, set up and analyze a series of experiments to configure and evaluate the performance of this autoscaler for a particular database (i.e., Cassandra) and a particular workload type, and (iv) validates the effectiveness of K8-Scalar as a workbench for accurately comparing the performance of different autoscaling strategies. Future work includes extending K8-Scalar with an improved research data management repository.
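    The "pluggable autoscaler" idea described above can be sketched as a small policy interface plus one concrete threshold policy driving a control loop. The names (`Autoscaler`, `ThresholdAutoscaler`) and the CPU-utilisation trigger are illustrative assumptions, not the actual K8-Scalar API or the Riemann-based design from the paper.

    ```python
    # Hedged sketch of a pluggable autoscaling policy: K8-Scalar lets
    # researchers swap policies like this one in and compare them under
    # identical workloads.

    class Autoscaler:
        """Policy interface: map observed load to a desired replica count."""

        def desired_replicas(self, current, cpu):
            raise NotImplementedError


    class ThresholdAutoscaler(Autoscaler):
        """Hypothetical policy: scale out above `high` utilisation,
        scale in below `low`, within [1, max_replicas]."""

        def __init__(self, low=0.3, high=0.7, max_replicas=10):
            self.low, self.high, self.max_replicas = low, high, max_replicas

        def desired_replicas(self, current, cpu):
            if cpu > self.high and current < self.max_replicas:
                return current + 1
            if cpu < self.low and current > 1:
                return current - 1
            return current


    # Simulated control loop over a sequence of observed CPU utilisations.
    scaler = ThresholdAutoscaler()
    replicas = 3
    for cpu in [0.9, 0.8, 0.5, 0.2, 0.1]:
        replicas = scaler.desired_replicas(replicas, cpu)
    print(replicas)  # → 3 (two scale-outs, a hold, then two scale-ins)
    ```

    Comparing two strategies in the workbench then amounts to running the same workload trace against two `Autoscaler` implementations and measuring the resulting replica counts and service-level metrics.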

    Special Issue: Big Data for context-aware applications and intelligent environments

    Disruptive paradigm shifts such as the Internet of Things (IoT) and Cyber-Physical Systems (CPS) are creating a wealth of streaming context information. Large-scale context-awareness combining IoT and Big Data drives the creation of smarter application ecosystems in diverse vertical domains, including smart health, finance, smart grids and cities, transportation, Industry 4.0, etc. This special issue addresses core topics on the design, use and evaluation of Big Data enabling technologies for building next-generation context-aware applications and computing systems for future intelligent environments.